Understandable AI
Pinaki Laskar on LinkedIn: #selfdrivingcars #machinelearning #deeplearning
There are three fundamental reasons:
- To deeply understand the world, for machine intelligence and learning as well as human intelligence, replacing statistical independence with a causal world model;
- To create a deeply understandable AI instead of Explainable AI;
- To build a Meta-disciplinary AI (Meta-AI) following the structural chain: Transdisciplinary AI (Trans-AI) → the World Hypergraph → Data Ontology → AI Models → ML/Deep Neural Networks → Human Intelligence.
First, it takes a human about 20–30 hours to learn how to drive a car, while it takes tens of thousands of hours to train a neural network to achieve the same capability. Even after all these years of training, and despite using the latest and greatest processing and sensor technology, #selfdrivingcars are still not deemed road-safe. We can train our learning model to recognize many of these situations, but there is an infinite number of them, and even after millions of miles driven, the #machinelearning model will not have experienced anywhere near all of them. That is because #deeplearning models have no inherent understanding of how the world works: they know no laws of physics, nor ethics, nor even liability laws.
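To make the contrast between statistical association and a causal world model concrete, here is a minimal toy sketch in Python (an illustration, not from the post; the variable names and probabilities are assumptions). It builds a tiny structural causal model in which rain causes both umbrella use and wet-road accidents: a purely statistical learner sees umbrella use "predict" accidents, while intervening with Pearl's do-operator reveals it has no causal effect.

```python
# Toy structural causal model (SCM), illustrative only:
#   rain -> umbrellas,  rain -> wet_road -> accident
# Umbrellas and accidents are correlated (common cause: rain),
# but forcing umbrella use, do(umbrellas), does not change accidents.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(do_umbrellas=None):
    rain = rng.random(n) < 0.3                   # exogenous cause
    if do_umbrellas is None:
        umbrellas = rain                         # observational regime
    else:
        umbrellas = np.full(n, do_umbrellas)     # intervention: do(umbrellas = value)
    wet_road = rain & (rng.random(n) < 0.9)
    accident = wet_road & (rng.random(n) < 0.2)
    return umbrellas, accident

# Observational conditioning: a strong (but spurious) association.
u, a = simulate()
print("P(accident | umbrellas)    ≈", round(a[u].mean(), 3))    # ~0.18
print("P(accident | no umbrellas) ≈", round(a[~u].mean(), 3))   # ~0.0

# Interventional query: the association vanishes under do().
_, a1 = simulate(do_umbrellas=True)
_, a0 = simulate(do_umbrellas=False)
print("P(accident | do(umbrellas))    ≈", round(a1.mean(), 3))  # ~0.054
print("P(accident | do(no umbrellas)) ≈", round(a0.mean(), 3))  # ~0.054
```

A model that only fits the observational distribution would happily use umbrella sales to predict accidents; a causal world model distinguishes what it sees from what happens when it acts, which is exactly the gap a self-driving system faces in situations it has never observed.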
Forget Explainable AI! What We Need Now is Understandable AI
As artificial intelligence technologies have gained popularity, creators and users have adopted and modified a variety of terms and phrases to describe and characterize their work. You have probably heard terms like "narrow AI", "deep learning", and "neural networks" used to distinguish between distinct types and roles of AI, different parts of AI solutions, and so on. Another result of the rapid expansion of AI's availability and use has been the demand for "explainable AI": approaches and procedures that make artificial intelligence more intelligible to all kinds of people. And while explainable AI is important, it is not always the answer. The ambition to understand how computers make decisions is admirable, but XAI technologies and approaches alone will never be sufficient.
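As a concrete example of what today's XAI toolbox actually delivers, here is a minimal sketch of post-hoc permutation importance, assuming Python with NumPy and scikit-learn (the toy data and model are illustrative assumptions, not from the article). It ranks features by how much shuffling each one hurts accuracy: an explanation of what the model relies on, but not of whether that reliance reflects how the world works.

```python
# Post-hoc explanation via permutation importance (one common XAI technique).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                   # three features: f0, f1, f2
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # label depends on f0 and f1; f2 is noise

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the resulting drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature f{i}: importance ≈ {imp:.3f}")
# Expected ranking (f0 > f1 > f2): a useful audit of what the model uses,
# yet silent on why those features should matter, or whether the model
# would still behave sensibly outside the training distribution.
```

This is the sense in which explainability falls short of understandability: the ranking describes the model's internal reliance on inputs, not a human-graspable account of the underlying domain.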
Pinaki Laskar on LinkedIn: #AI #neuralnetworks #selfdrivingcars
AI Researcher, Cognitive Technologist, Inventor - AI Thinking, Think Chain Innovator - AIoT, XAI, Autonomous Cars, IIoT; Founder, Fisheyebox; Spatial Computing Savant, Transformative Leader, Industry X.0 Practitioner.
Why should we shift our focus from Explainable AI to "Understandable AI"? There are two simple but fundamental reasons: to deeply understand the world, for machine intelligence and learning as well as human intelligence, replacing statistical independence with a causal world model; and to create a deeply understandable AI instead of Explainable AI. It takes a human about 20–30 hours to learn how to drive a car, yet tens of thousands of hours to train #neuralnetworks to the same capability, and even then #selfdrivingcars are still not deemed road-safe, because no number of miles driven can cover the infinite variety of situations a machine learning model must handle.
Interview With Megan J. Browning Kvamme, CEO at FactGem
AI will have a growing impact on many aspects of society in the years to come and we need to make sure we are thoughtful and deliberate as we think through the ethical impacts of these changes. I started my career in investment banking. As part of the due diligence process, you need to sift through large amounts of data, generally housed in different silos, and in different formats. You end up spending an inordinate amount of time getting the right data, from the right place, and into the right format to be useful. You would much rather have that time to focus on other priorities.
Is Explainability Enough? Why We Need Understandable AI
Artificial Intelligence is quickly becoming ubiquitous in our personal and professional lives, in ways we observe directly and in ways we don't see as readily. It is used to influence life-changing decisions, such as whether you get hired for that dream job, whom you will date, and whether you'll be approved for a loan on your first home. However, we have little insight into how these critical decisions are made with AI. As a result, there is increasing demand (and legislation) to ensure the influence of these technologies is understood. What is it we seek when we ask for explainability in AI, as in the GDPR's Article 22? Explainable by whom, and to whom?